Weakly supervised object localization (WSOL) relaxes the requirement of dense annotations for object localization by supervising the learning process with image-level classification labels. However, current WSOL methods suffer from over-activation at background locations and require post-processing to obtain the localization mask. This paper attributes these issues to the unawareness of background cues and proposes the background-aware classification activation map (B-CAM), which simultaneously learns localization scores for both objects and background using only image-level labels. In our B-CAM, two image-level features, aggregated from pixel-level features at potential background and object locations, are used respectively to purify the object feature from object-related background and to represent the feature of pure background samples. Based on these two features, an object classifier and a background classifier are then learned to determine the binary object localization mask. Our B-CAM can be trained end-to-end with the proposed stagger classification loss, which not only improves object localization but also suppresses background activation. Experiments show that our B-CAM outperforms one-stage WSOL methods on the CUB-200, OpenImages, and VOC2012 datasets.
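As a rough illustration of the idea (not the authors' implementation), the sketch below pools pixel-level features into an object feature and a background feature using a predicted localization map, then feeds them to two classifiers. The module name, feature dimensions, and pooling scheme are assumptions.

```python
# Minimal sketch of the B-CAM idea: per-pixel object scores pool backbone
# features into an object feature and a background feature, each classified.
import torch
import torch.nn as nn

class BCAMHead(nn.Module):
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.score = nn.Conv2d(in_dim, 1, kernel_size=1)  # per-pixel object score
        self.obj_cls = nn.Linear(in_dim, num_classes)     # object classifier
        self.bg_cls = nn.Linear(in_dim, num_classes)      # background classifier

    def forward(self, feat):                              # feat: (B, C, H, W)
        s = torch.sigmoid(self.score(feat))               # object localization map
        obj = (feat * s).flatten(2).sum(-1) / s.flatten(2).sum(-1).clamp(min=1e-6)
        bg = (feat * (1 - s)).flatten(2).sum(-1) / (1 - s).flatten(2).sum(-1).clamp(min=1e-6)
        return self.obj_cls(obj), self.bg_cls(bg), s

head = BCAMHead(in_dim=256, num_classes=200)
obj_logits, bg_logits, mask = head(torch.randn(2, 256, 14, 14))
print(obj_logits.shape, bg_logits.shape, mask.shape)
```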
Purpose: Deep neural networks (DNNs) have been widely applied to medical image classification, benefiting from their powerful mapping capability on medical images. However, these existing deep-learning-based methods depend on large amounts of carefully labeled images, and noise is inevitably introduced during the labeling process, degrading model performance. It is therefore important to devise robust training strategies that mitigate label noise in medical image classification tasks. Methods: In this work, we propose a novel Bayesian statistics guided label refurbishment mechanism (BLRM) to prevent DNNs from overfitting noisy images. BLRM utilizes the maximum a posteriori (MAP) probability from Bayesian statistics together with a designed time-weighted technique to selectively correct the labels of noisy images. When BLRM is activated, the training images are gradually purified over the training epochs, further improving classification performance. Results: Comprehensive experiments on both synthetic noisy images (the public OCT and Messidor datasets) and real-world noisy images (ANIMAL-10N) demonstrate that BLRM selectively refurbishes noisy labels, curbing the adverse effects of noisy data. Moreover, the anti-noise BLRM integrated with DNNs is effective at different noise ratios and is independent of the backbone DNN architecture. Furthermore, BLRM outperforms state-of-the-art comparative anti-noise methods. Conclusions: These investigations indicate that the proposed BLRM is capable of mitigating label noise in medical image classification tasks.
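A minimal sketch of a label-refurbishment step in this spirit, assuming a running average of per-sample class probabilities as the posterior; the warm-up, ramp, and margin parameters are illustrative assumptions rather than the paper's exact MAP formulation.

```python
# Toy label refurbishment: replace a label with the MAP class only when the
# model's posterior favors it by a clear margin, after a warm-up period.
import numpy as np

def refurbish_labels(posteriors, labels, epoch, warmup=10, ramp=10, margin=0.4):
    """posteriors: (N, K) running average of class probabilities per sample,
    labels: (N,) current (possibly noisy) integer labels."""
    if epoch < warmup:                              # trust the given labels early on
        return labels.copy()
    trust = min(1.0, (epoch - warmup + 1) / ramp)   # time-weighted trust in the model
    map_class = posteriors.argmax(axis=1)
    map_prob = posteriors.max(axis=1)
    label_prob = posteriors[np.arange(len(labels)), labels]
    # flip a label only when the MAP class is clearly more probable
    flip = (map_prob - label_prob) > margin * (1.0 - 0.5 * trust)
    new_labels = labels.copy()
    new_labels[flip] = map_class[flip]
    return new_labels

posteriors = np.array([[0.9, 0.1], [0.2, 0.8], [0.55, 0.45]])
labels = np.array([1, 1, 0])
print(refurbish_labels(posteriors, labels, epoch=30))  # -> [0 1 0]
```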
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose CORGI-PM, a Chinese cOrpus foR Gender bIas Probing and Mitigation, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
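The toy sketch below illustrates the kind of mapping described: WiFi amplitude and phase tensors are encoded and decoded into per-region UV coordinate maps. The antenna and subcarrier dimensions and the architecture itself are assumptions, not the paper's network.

```python
# Toy WiFi-to-dense-pose regressor: encode amplitude/phase, decode UV maps.
import torch
import torch.nn as nn

NUM_REGIONS = 24  # DensePose-style body parts

class WiFi2DensePose(nn.Module):
    def __init__(self, antennas=3, subcarriers=30, out_hw=(56, 56)):
        super().__init__()
        in_ch = 2 * antennas          # amplitude + phase per antenna
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.out_hw = out_hw
        # one (u, v) pair per body region at every output pixel
        self.decoder = nn.Linear(128, NUM_REGIONS * 2 * out_hw[0] * out_hw[1])

    def forward(self, amp, phase):    # each: (B, antennas, subcarriers)
        x = torch.cat([amp, phase], dim=1)
        z = self.encoder(x).squeeze(-1)            # (B, 128)
        uv = self.decoder(z)
        return uv.view(-1, NUM_REGIONS, 2, *self.out_hw)

model = WiFi2DensePose()
uv = model(torch.randn(4, 3, 30), torch.randn(4, 3, 30))
print(uv.shape)  # torch.Size([4, 24, 2, 56, 56])
```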
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with multiple modalities such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to potential noise in the modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance on multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a small parameter count, fast runtime, and good interpretability. Our code will be available soon.
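A minimal sketch of instance-level modality weighting in this spirit; the real MEAformer uses a transformer layer to predict cross-modal correlation coefficients, whereas this toy version only learns per-entity fusion weights over modality embeddings (the dimensions and modality count are assumptions).

```python
# Toy instance-level modality fusion: attention over modality embeddings
# produces per-entity weights that combine the modalities into one vector.
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.weight_head = nn.Linear(dim, 1)

    def forward(self, modal_embs):                 # (B, M, D): one row per modality
        ctx, _ = self.attn(modal_embs, modal_embs, modal_embs)
        w = torch.softmax(self.weight_head(ctx), dim=1)     # per-entity modality weights
        return (w * modal_embs).sum(dim=1), w.squeeze(-1)   # fused (B, D), weights (B, M)

fusion = ModalityFusion(dim=128)
fused, weights = fusion(torch.randn(8, 3, 128))    # e.g. image / relation / attribute views
print(fused.shape, weights.shape)
```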
Long document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto approach to improving a retriever by mimicking a heterogeneous yet powerful cross-encoder. However, in contrast to passages or sentences, retrieval on long documents suffers from the scope hypothesis that a long document may cover multiple topics, which increases structural heterogeneity and poses a granularity-mismatch issue, leading to inferior distillation efficacy. In this work, we propose a new learning framework, fine-grained distillation (FGD), for long-document retrievers. While preserving the conventional dense retrieval paradigm, it first produces globally consistent representations across different levels of granularity and then applies multi-granular aligned distillation only during training. In experiments, we evaluate our framework on two long-document retrieval benchmarks, on which it shows state-of-the-art performance.
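A hedged sketch of multi-granular aligned distillation for long documents: the choice of granularities (a per-passage score distribution and a max-pooled document score) and the loss terms are assumptions for illustration, not the paper's exact objective.

```python
# Toy multi-granular distillation: align the student with the teacher both on
# per-passage score distributions and on pooled document-level scores.
import torch
import torch.nn.functional as F

def multi_granular_distill_loss(stu_scores, tea_scores, temperature=1.0):
    """Scores have shape (B, P): B queries, P passages per long document."""
    # fine granularity: match the per-passage score distribution
    fine = F.kl_div(
        F.log_softmax(stu_scores / temperature, dim=-1),
        F.softmax(tea_scores / temperature, dim=-1),
        reduction="batchmean",
    )
    # coarse granularity: match the pooled document-level scores
    coarse = F.mse_loss(stu_scores.max(dim=-1).values,
                        tea_scores.max(dim=-1).values)
    return fine + coarse

loss = multi_granular_distill_loss(torch.randn(4, 8), torch.randn(4, 8))
print(loss.item())
```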
To improve the performance of the dual-encoder retriever, one effective approach is knowledge distillation from the cross-encoder ranker. Existing works construct the candidate passages following the supervised learning setting where a query is paired with a positive passage and a batch of negatives. However, through empirical observation, we find that even the hard negatives from advanced methods are still too trivial for the teacher to distinguish, preventing the teacher from transferring abundant dark knowledge to the student through its soft label. To alleviate this issue, we propose ADAM, a knowledge distillation framework that can better transfer the dark knowledge held in the teacher with Adaptive Dark exAMples. Different from previous works that only rely on one positive and hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query through mixing-up and masking in discrete space. Furthermore, as the quality of knowledge held in different training instances varies as measured by the teacher's confidence score, we propose a self-paced distillation strategy that adaptively concentrates on a subset of high-quality instances to conduct our dark-example-based knowledge distillation to help the student learn better. We conduct experiments on two widely-used benchmarks and verify the effectiveness of our method.
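An illustrative sketch of constructing dark examples by mixing and masking token sequences in discrete space; the token ids, the [MASK] id, and the mixing and masking ratios are assumptions for the example.

```python
# Toy dark-example construction: interleave tokens from a positive and a
# negative passage, then randomly mask some tokens, yielding a passage of
# moderate relevance to the query.
import random

MASK_ID = 103  # e.g. BERT's [MASK] token id (assumed vocabulary)

def make_dark_example(pos_tokens, neg_tokens, mix_ratio=0.5, mask_ratio=0.15):
    """Mix a positive and a negative passage in discrete space, then mask."""
    length = min(len(pos_tokens), len(neg_tokens))
    mixed = [pos_tokens[i] if random.random() < mix_ratio else neg_tokens[i]
             for i in range(length)]
    return [MASK_ID if random.random() < mask_ratio else tok for tok in mixed]

pos = [101, 2023, 2003, 1037, 6919, 3185, 102]
neg = [101, 1996, 4633, 2001, 9643, 2651, 102]
print(make_dark_example(pos, neg))
```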
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP's performance on new tasks. In this work, we identify and explore the problem of \emph{adapting large-scale models for zero-shot adversarial robustness}. We first identify two key factors during model adaptation -- training losses and adaptation methods -- that affect the model's zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning on a small set of training data. We apply this training loss to two adaptation methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of text, while finetuning wins when text guidance is present. Overall, our approach significantly improves zero-shot adversarial robustness over CLIP, with an average improvement of over 31 points across ImageNet and 15 zero-shot datasets. We hope this work can shed light on understanding the zero-shot adversarial robustness of large-scale models.
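A simplified stand-in for a text-guided contrastive adversarial training loss, not the authors' exact objective: adversarial image features are pulled toward the text embedding of their own class and pushed away from other classes' text embeddings. The feature dimension and temperature are assumptions.

```python
# Toy text-guided contrastive loss over adversarial image features.
import torch
import torch.nn.functional as F

def text_guided_contrastive_loss(adv_img_feats, text_feats, labels, temperature=0.07):
    """adv_img_feats: (B, D) features of adversarial images,
    text_feats: (K, D) class text embeddings, labels: (B,) class indices."""
    img = F.normalize(adv_img_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    logits = img @ txt.t() / temperature      # (B, K) image-to-text similarities
    return F.cross_entropy(logits, labels)    # own-class text is the positive

loss = text_guided_contrastive_loss(torch.randn(8, 512), torch.randn(10, 512),
                                    torch.randint(0, 10, (8,)))
print(loss.item())
```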
The fifth generation of the Radio Access Network (RAN) has brought new services, technologies, and paradigms with the corresponding societal benefits. However, the energy consumption of 5G networks is a concern today. In recent years, the design of new methods for decreasing RAN power consumption has attracted interest from both the research community and standardization bodies, and many energy-saving solutions have been proposed. However, there is still a need to understand the power consumption behavior of state-of-the-art base station architectures, such as multi-carrier active antenna units (AAUs), as well as the impact of different network parameters. In this paper, we present a power consumption model for 5G AAUs based on artificial neural networks. We demonstrate that this model achieves good estimation performance and that it is able to capture the benefits of energy saving while handling the complexity of multi-carrier base station architectures. Importantly, multiple experiments are carried out to show the advantage of designing a general model able to capture the power consumption behaviors of different types of AAUs. Finally, we provide an analysis of the model scalability and the training data requirements.
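A toy sketch of the modeling idea: a small feed-forward network regresses AAU power consumption from carrier-configuration and load features. The feature set, units, and network size are assumptions for illustration.

```python
# Toy ANN-based AAU power model: features in, predicted power out.
import torch
import torch.nn as nn

class AAUPowerModel(nn.Module):
    def __init__(self, num_features=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),            # predicted power consumption (e.g. watts)
        )

    def forward(self, x):
        return self.net(x)

# features per sample could be: number of carriers, bandwidth, transmit power,
# traffic load, active antenna elements, sleep-mode flag (all assumptions)
model = AAUPowerModel()
power = model(torch.randn(32, 6))
print(power.shape)  # torch.Size([32, 1])
```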
This paper presents the ARCAD simulator for the rapid development of Unmanned Aerial Systems (UAS), including underactuated and fully-actuated multirotors, fixed-wing aircraft, and Vertical Take-Off and Landing (VTOL) hybrid vehicles. The simulator is designed to accelerate these aircraft's modeling and control design. It provides various analyses of the design and operation, such as wrench-set computation, controller response, and flight optimization. In addition to simulating free flight, it can simulate the physical interaction of the aircraft with its environment. The simulator is written in MATLAB to allow rapid prototyping and is capable of generating graphical visualization of the aircraft and the environment in addition to generating the desired plots. It has been used to develop several real-world multirotor and VTOL applications. The source code is available at https://github.com/keipour/aircraft-simulator-matlab.